\input jmclet[let,jmc]
\jmclet
\address
Scientific American
415 Madison Avenue
New York, N.Y. 10017
\body
Sirs:
%searle[f89,jmc] Letter to Scientific American about Searle article
Perhaps it would be better for a computer scientist to
stay out of a ``debate'' among philosophers. However, in his
``Is the Brain's Mind a Computer Program?'' [SCIENTIFIC AMERICAN,
January 1990], John Searle has (perhaps inadvertently) made a
statement capable of being empirically tested. Moreover, it is
directly relevant to the main contention of his article.
Let's assume that the Chinese room rules are capable
of passing a Chinese Turing test, i.e., that they do not just produce
trivial responses based on the presence of key words in the text.
I favor the ``system reply'' and like Searle's
modified Chinese room in which the man has memorized the rules.
The man would then be acting the way computers do all the time---
running an interpreter program that is interpreting another
program. If humans ordinarily carried out several independent
mental processes, maybe we couldn't say Searle wrote that paper;
we might have to say that Searle3 wrote the paper, but Searle1 and Searle2
didn't agree with it and almost succeeded in preventing Searle3
from making the Searle body put it in the mail box. Likewise,
let Man1 be the usual personality and Man2 be the personality of
the process that Man1 is carrying out by executing the rules.
Man2 knows Chinese and Man1 doesn't.
Here's where Searle has made a commitment about the real
world.
His axiom 3 is:
{\it ``Syntax by itself is neither constitutive of
nor sufficient for semantics.''}
The statement is vague, because it is unclear what he means by
``semantics'' and what he means by ``sufficient for''.
However, he illustrates it in the following paragraph.
{\it ``Fine. But now imagine that as I am sitting in the Chinese room
shuffling the Chinese symbols, I get bored with shuffling the---to
me---meaningless symbols. So, suppose that I decide to interpret
the symbols as standing for moves in a chess game. Which semantics
is the system giving off now? Is it giving off a Chinese semantics
or a chess semantics, or both simultaneously? Suppose there is a
third person looking in through the window, and she decides that
the symbol manipulations can all be interpreted as stock-market
predictions. And so on. There is no limit to the number of
interpretations that can be assigned to the symbols because,
to repeat, the symbols are purely formal. They have no
intrinsic semantics.''}
This claims that a Chinese text reporting a
conversation could be a report of a chess game or a stock-market
prediction in some other language. Experience with cryptography,
the statistical theory of communication, and the theory of
algorithmic complexity shows that the claim is untrue for texts of
significant length.
Cryptography first. If Searle were right, cryptograms
would often have ambiguous solutions. In fact, simple substitution
ciphers in English generally have unique solutions if they are
longer than 20 letters. Chinese would have a longer {\it unicity
distance} but surely not longer than a few hundred letters.
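For concreteness, here is a minimal sketch of the Shannon unicity-distance
calculation behind that estimate, written in Python. It assumes a substitution
key of about log2(26!) bits and roughly 1.5 bits of genuine information per
English letter; both constants are rough assumptions rather than measurements,
and the classical estimate comes out near 28 letters.

    import math

    # Shannon unicity distance U = H(K) / D for a simple substitution
    # cipher over the 26-letter English alphabet.  Assumed figures:
    # about 1.5 bits of real information per letter, so the redundancy
    # is D = log2(26) - 1.5; both numbers are rough estimates.
    key_entropy = math.log2(math.factorial(26))   # ~88.4 bits in a substitution key
    redundancy = math.log2(26) - 1.5              # ~3.2 bits of redundancy per letter

    print(f"unicity distance ~ {key_entropy / redundancy:.0f} letters")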
Here's approximately what happens. Consider a Chinese
sentence involving the character for dog. Suppose someone
proposes to interpret the sentence as a stock market prediction
with that character meaning IBM. In order that the sentence make
sense, he has to come up with interpretations for the other
characters. Suppose he succeeds. The next time the character
for dog appears, it again has to mean IBM, and the
interpretations of the other characters are now also partially
constrained by the first sentence. Very soon the alternative
interpretation breaks down. Of course, if a word appears only
once, it will be possible to find a different
interpretation for it, but such tricks won't suffice to get
a prescribed interpretation for the whole text.
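The way the constraints accumulate can be shown with a toy sketch: try to
read one string of symbols as another under a consistent one-to-one
substitution, the single-symbol analogue of making the character for dog
always mean IBM. The texts and the symbol-level model below are invented for
illustration; actual reinterpretation would work at the level of characters
or words, where the same kind of consistency constraints apply.

    # Toy model of reinterpretation: can text A be read as text B under a
    # consistent one-to-one substitution of symbols?  Returns the mapping,
    # or the position of the first contradiction.
    def reinterpret(source, target):
        if len(source) != len(target):
            return None, 0
        forward, backward = {}, {}
        for i, (s, t) in enumerate(zip(source, target)):
            if forward.get(s, t) != t or backward.get(t, s) != s:
                return None, i          # the proposed reading contradicts itself
            forward[s] = t
            backward[t] = s
        return forward, len(source)

    print(reinterpret("ab", "xy"))                    # short text: a reading exists
    print(reinterpret("the dog ran", "buy ibm now"))  # contradiction by position 5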
Statistical communication theory shows how to measure the
information content of natural languages. For written English,
it's about one bit per letter. Judging from the length of signs,
it's a little less for most other European languages.
I don't know the figure for Chinese; signs in Chinese are slightly
shorter than English signs, but the amount of detail per square
inch is larger.
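As a rough check on that figure, here is a minimal Python sketch that
computes the zeroth-order entropy of English from approximate letter
frequencies; the percentages are assumed round numbers for illustration, not
figures from the article. Letter frequencies alone already trim about half a
bit from the 4.7 bits a flat 26-letter alphabet would carry; conditioning on
preceding letters and words is what brings the estimate down toward one bit
per letter.

    import math

    # Zeroth-order entropy of English from approximate letter frequencies
    # (percentages; assumed round figures for illustration).
    freq = {'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
            's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
            'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
            'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.2, 'x': 0.2,
            'q': 0.1, 'z': 0.1}
    total = sum(freq.values())
    h0 = -sum(p / total * math.log2(p / total) for p in freq.values())

    print(f"flat alphabet: {math.log2(26):.2f} bits/letter, "
          f"letter frequencies alone: {h0:.2f} bits/letter")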
The redundancy of natural languages seems to be necessary
in order to permit partial texts to be understood as they are read
or heard.
Algorithmic complexity theory (treated by Gregory Chaitin
in a Scientific American article) tells us that it is indeed possible
to have a language in which the Chinese conversation has the interpretation
of a stock-market prediction. However, it will be a very complicated
language, and its description will almost certainly be as long as
the text to be interpreted. As the Chinese conversation goes on,
the length of the rules interpreting it as a stock-market prediction
will grow. The resulting languages will be too complex to be learned
by people, let alone two-year-old children.
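A crude way to see the point is to use compressed length as a stand-in for
description length; algorithmic complexity itself is uncomputable, so this is
only an upper-bound proxy, and the sample texts in the Python sketch below
are invented. If short rules turned the conversation into the stock-market
prediction, describing the two texts together would cost far less than
describing them separately; for unrelated texts it does not, so the
interpreting rules must grow roughly as fast as the text they interpret.

    import zlib

    def description_length(s):
        # Compressed size in bytes: a crude, computable upper bound on
        # the description length of the string s.
        return len(zlib.compress(s.encode("utf-8"), 9))

    conversation = ("Excuse me, could you tell me how to get to the railway "
                    "station?  Go straight past the market and turn left at the bank.")
    prediction = ("Industrials should open lower; IBM is likely to lose a point "
                  "while rails and grain futures drift sideways into the close.")

    separate = description_length(conversation) + description_length(prediction)
    joint = description_length(conversation + prediction)

    # If short rules related the two texts, joint would be far smaller
    # than separate; for unrelated texts the saving is only a few bytes.
    print(f"separate: {separate} bytes, joint: {joint} bytes, "
          f"saving: {separate - joint} bytes")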
This isn't just an accidental phenomenon associated with
Searle's fantasy, but is an essential part of the relation between
syntax and semantics. The fact that the syntax of natural languages
does not generate texts with many interpretations is part of what allows
a child to learn its native language. If Searle were right, what a
child hears and the gestures it sees would admit equally good
alternative interpretations.
Searle's article lends itself to a concrete challenge.
Take a page of Chinese conversation from an existing novel.
Devise a pseudo-Chinese language in which the page is interpreted
as a stock-market prediction.
Building a ``Chinese room'' computer program
is beyond the present state of artificial intelligence. Doing
it would involve formalizing large amounts of world knowledge and
linguistic knowledge. Then will be the time for the philosophers
to argue about whether the program ``really knows'' the knowledge
we have put into it.
\closing
Sincerely,
John McCarthy
\endletter
\vfill\end